

Content On This Page

  • Binomial Experiment: Definition and Bernoulli Trials
  • Binomial Distribution: Definition and Probability Mass Function $P(X=k) = \binom{n}{k} p^k q^{n-k}$
  • Mean and Variance of Binomial Distribution
  • Properties and Applications of Binomial Distribution


Binomial Distribution




Binomial Experiment: Definition and Bernoulli Trials


Bernoulli Trial

The foundation of the binomial distribution is a simple type of random experiment called a **Bernoulli trial**. A Bernoulli trial is a single experiment with the following characteristics:

  • It has exactly two possible outcomes, conventionally labelled "success" (S) and "failure" (F).
  • The probability of success, denoted $p$, is fixed, and the probability of failure is $q = 1 - p$.

Examples of Bernoulli Trials:

  • Tossing a coin once (e.g., Head = success, Tail = failure).
  • Rolling a die once and checking whether a '6' appears ('6' = success, any other face = failure).
  • Inspecting a single manufactured item and recording whether it is defective or non-defective.


Binomial Experiment

A **binomial experiment** is a specific type of statistical experiment that consists of a fixed number of repeated, independent Bernoulli trials. For an experiment to be classified as binomial, it must satisfy the following four conditions:

  1. Fixed Number of Trials ($n$):

    The experiment consists of a predetermined, fixed number of trials, denoted by $n$. The number of trials does not change during the experiment.

  2. Two Outcomes per Trial:

    Each trial must have only two possible outcomes, which can be classified as "success" (S) or "failure" (F), as in a Bernoulli trial.

  3. Constant Probability of Success ($p$):

    The probability of obtaining a "success", denoted by $p$, must remain exactly the same from one trial to the next. Consequently, the probability of "failure", $q = 1 - p$, is also constant for all trials.

  4. Independent Trials:

    The outcome of any single trial must be independent of the outcomes of all other trials. This means knowing the result of one trial does not change the probability of success or failure on any other trial.

In a binomial experiment, the random variable of interest is typically defined as the **total number of successes** that occur over the $n$ trials. The possible values for this random variable are $0, 1, 2, \dots, n$.
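To make these conditions concrete, here is a minimal Python sketch (the function name and structure are purely illustrative, not from any standard library) that simulates one binomial experiment: it runs a fixed number of independent Bernoulli trials with a constant success probability and counts the successes.

```python
import random

def simulate_binomial_experiment(n, p, seed=None):
    """Run n independent Bernoulli trials with success probability p
    and return the total number of successes."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n):
        # Each iteration is one Bernoulli trial: success with probability p
        if rng.random() < p:
            successes += 1
    return successes

# Example: toss a fair coin 5 times and count the heads
print(simulate_binomial_experiment(n=5, p=0.5))
```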


Example

Example 1. Consider tossing a fair coin 5 times. Let success be getting a Head. Is this a binomial experiment?

Answer:

Given: Tossing a fair coin 5 times. Success = getting a Head.

To Determine: If this is a binomial experiment.

Solution:

We check if the experiment meets the four conditions for a binomial experiment:

  1. Fixed Number of Trials ($n$): The coin is tossed exactly 5 times. So, $n=5$, which is a fixed number. This condition is met.
  2. Two Outcomes: Each individual coin toss has only two possible outcomes: Head or Tail. We have defined getting a Head as "Success" and getting a Tail as "Failure". This condition is met.
  3. Constant Probability of Success ($p$): The coin is fair. The probability of getting a Head on any single toss is $P(\text{Head}) = 0.5$. This probability remains the same for each of the 5 tosses. So, $p=0.5$. The probability of failure is $q = 1 - 0.5 = 0.5$, which is also constant. This condition is met.
  4. Independent Trials: The outcome of one coin toss does not influence or affect the outcome of any other coin toss. Each toss is independent of the others. This condition is met.

Since all four conditions are satisfied, this experiment is a **binomial experiment**.

The random variable associated with this experiment would be the number of heads obtained in the 5 tosses. This random variable can take values $0, 1, 2, 3, 4, 5$.



Binomial Distribution: Definition and Probability Mass Function $P(X=k) = \binom{n}{k} p^k q^{n-k}$


Definition

The **Binomial Distribution** is a discrete probability distribution that models the number of "successes" in a binomial experiment. It describes the probability of obtaining exactly $k$ successes in a fixed number $n$ of independent Bernoulli trials, where each trial has the same probability of success $p$.

If a random variable $X$ represents the number of successes in a binomial experiment with $n$ trials and probability of success $p$, we say that $X$ follows a binomial distribution and denote it as $X \sim B(n, p)$. The parameters of the binomial distribution are $n$ and $p$.

The possible values for the random variable $X$ (number of successes) are integers from 0 to $n$: $0, 1, 2, \dots, n$.


Probability Mass Function (PMF)

For a discrete random variable $X$ that follows a binomial distribution with parameters $n$ and $p$, the probability of obtaining exactly $k$ successes is given by the **Binomial Probability Formula**, which is also the **Probability Mass Function (PMF)** of the binomial distribution.

The formula is:

$$P(X=k) = \binom{n}{k} p^k q^{n-k}$$

... (1)

This formula is valid for $k = 0, 1, 2, \dots, n$.

Where:

  • $n$ is the total number of trials.
  • $k$ is the number of successes ($k = 0, 1, 2, \dots, n$).
  • $p$ is the probability of success on a single trial.
  • $q = 1 - p$ is the probability of failure on a single trial.
  • $\binom{n}{k} = \frac{n!}{k!(n-k)!}$ is the binomial coefficient, the number of ways to choose which $k$ of the $n$ trials are successes.


Explanation of the Binomial Probability Formula

The binomial probability formula $P(X=k) = \binom{n}{k} p^k q^{n-k}$ combines two components:

  • The binomial coefficient $\binom{n}{k}$ counts the number of different orders in which exactly $k$ successes can occur among the $n$ trials.
  • The product $p^k q^{n-k}$ is the probability of any one particular sequence of $k$ successes and $n-k$ failures, which is the same for every such sequence because the trials are independent.

By multiplying these two components, the formula gives the total probability of obtaining $k$ successes, considering all the different ways those $k$ successes can be arranged among the $n$ trials.
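As an aside (not part of the original derivation), the formula maps directly onto a few lines of Python; `math.comb` supplies the binomial coefficient $\binom{n}{k}$.

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k) for X ~ B(n, p): number of arrangements of k successes
    times the probability of any one such arrangement."""
    q = 1 - p
    return comb(n, k) * p**k * q**(n - k)

# Example: probability of exactly 3 heads in 4 tosses of a fair coin
print(binomial_pmf(3, 4, 0.5))   # 0.25
```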


Example

Example 1. A fair coin is tossed 4 times. What is the probability of getting exactly 3 heads?

Answer:

Given: Tossing a fair coin 4 times. Event of interest: getting exactly 3 heads.

To Find: The probability of getting exactly 3 heads.

Solution:

This scenario meets the conditions of a binomial experiment:

  • Fixed number of trials: $n=4$ (the coin is tossed 4 times).
  • Two outcomes per trial: Success = Head (H), Failure = Tail (T).
  • Constant probability of success: The coin is fair, so $P(\text{Head}) = p = 0.5$ for each toss. $P(\text{Tail}) = q = 1 - p = 0.5$.
  • Independent trials: The outcome of one toss does not affect the others.

We are interested in the probability of getting exactly $k=3$ successes (heads).

Using the binomial probability formula $P(X=k) = \binom{n}{k} p^k q^{n-k}$ (Formula 1):

Substitute $n=4$, $k=3$, $p=0.5$, $q=0.5$:

$$P(X=3) = \binom{4}{3} (0.5)^3 (0.5)^{4-3}$$

... (iii)

First, calculate the binomial coefficient $\binom{4}{3}$:

$$\binom{4}{3} = \frac{4!}{3!(4-3)!} = \frac{4!}{3!1!}$$

... (iv)

$$\binom{4}{3} = \frac{4 \times \cancel{3!}}{\cancel{3!} \times 1!} = \frac{4}{1} = 4$$

($4! = 4 \times 3!, 1! = 1$)

$$\binom{4}{3} = 4$$

... (v)

Now, calculate the powers of $p$ and $q$:

  • $p^k = (0.5)^3 = 0.5 \times 0.5 \times 0.5 = 0.125$.
  • $q^{n-k} = (0.5)^{4-3} = (0.5)^1 = 0.5$.

Substitute these values and the binomial coefficient back into formula (iii):

$$P(X=3) = 4 \times (0.125) \times (0.5)$$

... (vi)

$$P(X=3) = 4 \times 0.0625$$

($0.125 \times 0.5 = 0.0625$)

$$P(X=3) = 0.25$$

... (vii)

The probability of getting exactly 3 heads in 4 tosses of a fair coin is 0.25 or 1/4.
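If SciPy is available, the same value can be cross-checked with its built-in binomial distribution (an optional verification, not part of the solution above):

```python
from scipy.stats import binom

# X ~ B(4, 0.5); P(X = 3)
print(binom.pmf(3, 4, 0.5))   # ≈ 0.25
```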




Mean and Variance of Binomial Distribution


For a random variable $X$ that follows a binomial distribution, denoted as $X \sim B(n, p)$, where $n$ is the number of trials and $p$ is the probability of success on a single trial, the expected value (mean) and variance can be calculated using straightforward formulas derived from the properties of expected value and variance.

Mean (Expected Value) of a Binomial Distribution

The mean or expected value of a binomial random variable $X$ represents the theoretical average number of successes one would expect to observe over $n$ independent trials, each with success probability $p$.

Formula for the Mean of a Binomial Distribution:

$$E(X) = \mu = np$$

... (1)

Where:

  • $E(X)$ or $\mu$ is the mean (expected value) of the distribution.
  • $n$ is the number of trials.
  • $p$ is the probability of success on a single trial.

Intuition: The formula $np$ is intuitive. If you toss a fair coin ($p=0.5$) 10 times ($n=10$), you would expect, on average, $10 \times 0.5 = 5$ heads. If you roll a die ($p=1/6$ for rolling a 6) 12 times ($n=12$), you would expect, on average, $12 \times (1/6) = 2$ sixes.

Derivation Outline: The formula $E(X) = np$ can be formally derived from the definition of expected value for a discrete random variable $E(X) = \sum_{k=0}^{n} k \cdot P(X=k)$. Substituting the binomial PMF, we get $E(X) = \sum_{k=0}^{n} k \binom{n}{k} p^k q^{n-k}$. This sum involves manipulating binomial coefficients and can be shown to simplify to $np$.
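For reference, a condensed version of this manipulation (using the identity $k\binom{n}{k} = n\binom{n-1}{k-1}$ and the substitution $j = k-1$) is:

$$E(X) = \sum_{k=1}^{n} k \binom{n}{k} p^k q^{n-k} = np \sum_{k=1}^{n} \binom{n-1}{k-1} p^{k-1} q^{(n-1)-(k-1)} = np \sum_{j=0}^{n-1} \binom{n-1}{j} p^{j} q^{(n-1)-j} = np\,(p+q)^{n-1} = np$$

since $p + q = 1$.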


Variance of a Binomial Distribution

The variance of a binomial random variable $X$ measures the spread or variability in the number of successes around the mean ($np$). It quantifies how much the actual number of successes is likely to deviate from the expected number.

Let $q = 1 - p$ be the probability of failure on a single trial.

Formula for the Variance of a Binomial Distribution:

$$Var(X) = \sigma^2 = npq$$

... (2)

Where:

  • $Var(X)$ or $\sigma^2$ is the variance of the distribution.
  • $n$ is the number of trials.
  • $p$ is the probability of success and $q = 1 - p$ is the probability of failure on a single trial.

Derivation Outline: The variance can be derived using the formula $Var(X) = E(X^2) - [E(X)]^2$. This requires first calculating $E(X^2) = \sum_{k=0}^{n} k^2 \binom{n}{k} p^k q^{n-k}$. This summation is more involved than the calculation for $E(X)$ and typically relies on advanced combinatorial identities or moment generating functions. The result of $E(X^2)$ simplifies to $npq + (np)^2$. Substituting this and $E(X)=np$ into the variance formula gives $Var(X) = (npq + (np)^2) - (np)^2 = npq$.
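A condensed version of this step uses the factorial moment $E[X(X-1)]$ together with the identity $k(k-1)\binom{n}{k} = n(n-1)\binom{n-2}{k-2}$:

$$E[X(X-1)] = \sum_{k=2}^{n} k(k-1) \binom{n}{k} p^k q^{n-k} = n(n-1)p^2 \sum_{j=0}^{n-2} \binom{n-2}{j} p^{j} q^{(n-2)-j} = n(n-1)p^2$$

so that $E(X^2) = E[X(X-1)] + E(X) = n(n-1)p^2 + np = (np)^2 + npq$, and hence

$$Var(X) = E(X^2) - [E(X)]^2 = (np)^2 + npq - (np)^2 = npq$$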


Standard Deviation of a Binomial Distribution

The standard deviation ($\sigma$) of a binomial random variable is the positive square root of its variance. It is a measure of spread in the same units as the number of successes.

Formula for the Standard Deviation of a Binomial Distribution:

$$SD(X) = \sigma = \sqrt{npq}$$

... (3)


Example

Example 1. If a fair six-sided die is rolled 12 times, what is the mean and standard deviation of the number of times a '6' appears?

Answer:

Given: Rolling a fair six-sided die 12 times. Random variable X = number of times a '6' appears.

To Find: The mean and standard deviation of X.

Solution:

This experiment meets the conditions of a binomial experiment:

  • Fixed number of trials: $n=12$.
  • Two outcomes per trial: Success = Rolling a '6', Failure = Not rolling a '6'.
  • Constant probability of success: For a fair die, $P(\text{rolling a 6}) = 1/6$. So, $p = 1/6$.
  • Constant probability of failure: $q = 1 - p = 1 - 1/6 = 5/6$.
  • Independent trials: Each roll is independent.

The random variable $X$, the number of times a '6' appears, follows a binomial distribution $X \sim B(12, 1/6)$.

Calculate the Mean (Expected Value):

Using the formula $E(X) = np$ (Formula 1):

$$E(X) = 12 \times \frac{1}{6}$$

... (iv)

$$E(X) = \frac{12}{6} = 2$$

... (v)

The mean number of times a '6' appears in 12 rolls is 2. This means, on average, you expect to roll a '6' twice.

Calculate the Variance:

Using the formula $Var(X) = npq$ (Formula 2):

$$Var(X) = 12 \times \frac{1}{6} \times \frac{5}{6}$$

... (vi)

$$Var(X) = \cancel{12}^{2} \times \frac{1}{\cancel{6}} \times \frac{5}{6} = 2 \times \frac{5}{6}$$

$$Var(X) = \frac{10}{6} = \frac{5}{3}$$

... (vii)

The variance of the number of times a '6' appears is $\frac{5}{3}$.

Calculate the Standard Deviation:

Using the formula $SD(X) = \sqrt{Var(X)}$ (Formula 3):

$$SD(X) = \sqrt{\frac{5}{3}}$$

... (viii)

We can leave the answer as $\sqrt{5/3}$ or rationalize the denominator:

$$SD(X) = \frac{\sqrt{5}}{\sqrt{3}} = \frac{\sqrt{5} \times \sqrt{3}}{\sqrt{3} \times \sqrt{3}} = \frac{\sqrt{15}}{3}$$

... (ix)

Numerically, $\sqrt{5} \approx 2.236$, $\sqrt{3} \approx 1.732$, $\sqrt{15} \approx 3.873$.

$SD(X) \approx \sqrt{1.666...} \approx 1.291$.

$$SD(X) \approx 1.291$$

... (x)

Mean = 2.

Standard Deviation = $\sqrt{5/3}$ or $\frac{\sqrt{15}}{3}$ (approximately 1.291).
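The same figures can be reproduced with a few lines of Python (a quick numerical check, not part of the solution itself):

```python
from math import sqrt

# X ~ B(12, 1/6): number of sixes in 12 rolls of a fair die
n, p = 12, 1/6
q = 1 - p

mean = n * p              # 2.0
variance = n * p * q      # 1.666... = 5/3
std_dev = sqrt(variance)  # ≈ 1.291

print(mean, variance, std_dev)
```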



Properties and Applications of Binomial Distribution


Properties of the Binomial Distribution

The binomial distribution $X \sim B(n, p)$ has several important properties:

  • It is a discrete distribution: $X$ can take only the integer values $0, 1, 2, \dots, n$.
  • It is completely determined by its two parameters, $n$ (the number of trials) and $p$ (the probability of success).
  • The probabilities $P(X=k) = \binom{n}{k} p^k q^{n-k}$ are the successive terms in the binomial expansion of $(q + p)^n$, so they automatically sum to 1.
  • Mean $= np$, Variance $= npq$ and Standard Deviation $= \sqrt{npq}$; since $0 < q < 1$, the variance is always less than the mean.
  • The distribution is symmetric when $p = 0.5$, skewed to the right when $p < 0.5$, and skewed to the left when $p > 0.5$.


Applications of the Binomial Distribution

The binomial distribution is one of the most frequently used discrete probability distributions in various fields because many real-world scenarios can be modeled as a series of independent Bernoulli trials. It is applicable in any situation where an experiment consists of a fixed number of independent trials, each resulting in one of two outcomes with constant probabilities.

Common Applications:

  • Quality control: modelling the number of defective items found in a fixed-size sample from a production line.
  • Medicine and clinical trials: modelling the number of patients, out of a treated group, who respond to a treatment.
  • Surveys and polling: modelling the number of respondents in a fixed sample who answer "yes" to a question.
  • Games of chance: modelling the number of heads in repeated coin tosses or the number of times a particular face appears in repeated die rolls.

The key to applying the binomial distribution correctly is verifying that the four underlying conditions of a binomial experiment (fixed $n$, two outcomes, constant $p$, independent trials) are reasonably met by the real-world scenario.
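As one concrete (hypothetical) illustration of the quality-control application, the snippet below computes the probability of finding at most 2 defective items in a sample of 20, assuming each item is independently defective with probability 0.05:

```python
from math import comb

# Hypothetical scenario: sample of n = 20 items, each independently
# defective with probability p = 0.05 (success = "item is defective")
n, p = 20, 0.05
q = 1 - p

# P(X <= 2) = P(X=0) + P(X=1) + P(X=2)
prob = sum(comb(n, k) * p**k * q**(n - k) for k in range(3))
print(round(prob, 4))   # ≈ 0.9245
```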